

Network traffic data


Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

The paper introduces an efficient and scalable core tensor Schatten 1-norm minimization (CSNM) method for simultaneous tensor decomposition and completion. Reviewer summary: this paper presents a method to decompose and complete tensors; Section 5 is quite packed.



A Comprehensive Survey on Network Traffic Synthesis: From Statistical Models to Deep Learning

Sivaroopan, Nirhoshan, Silva, Kaushitha, Madarasingha, Chamara, Dahanayaka, Thilini, Jourjon, Guillaume, Jayasumana, Anura, Thilakarathna, Kanchana

arXiv.org Artificial Intelligence

The limitations of the Poisson process were more evident when modeling high-speed network traffic, particularly real-time data traffic modeling for next-generation networks. For example, Liji et al. [85] demonstrated that the Stationary Poisson Increment Process can only model Short Range Dependence (SRD) but not LRD. To address this limitation, the authors proposed using second-order self-similarity models, such as fractional Gaussian noise and fractional ARIMA processes, as a more appropriate approach. In the meantime, researchers also explored modeling data center network traffic using Poisson processes. To better simulate realistic traffic in data center environments, the generation of flow-level network traffic matrices based on the Poisson shot-noise model is proposed in [172]. By incorporating factors such as flow arrival rates, intra-rack traffic ratios, flow sizes, and durations, the Poisson shot-noise process offers a more accurate representation of traffic patterns in data centers.

B. Weibull distribution

As discussed earlier, the limitations of Poisson processes for modeling network traffic led to exploring other distributions. One such promising model was the Weibull distribution, mainly due to its flexibility to model both heavy-tailed and non-heavy-tailed distributions [11].
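The relationship between the two models can be sketched with standard-library sampling: exponential inter-arrival times correspond to a Poisson process, and the Weibull distribution generalizes them, since with shape 1 it reduces exactly to the exponential case while shape < 1 yields heavier tails. The function names and parameter values below are illustrative, not taken from the surveyed papers.

```python
import random

def poisson_interarrivals(rate, n, seed=0):
    """Exponential inter-arrival times of a Poisson process with the given rate."""
    rng = random.Random(seed)
    return [rng.expovariate(rate) for _ in range(n)]

def weibull_interarrivals(shape, scale, n, seed=0):
    """Weibull inter-arrival times; shape = 1 reduces to the exponential
    (Poisson) case, while shape < 1 produces a heavier tail."""
    rng = random.Random(seed)
    # random.weibullvariate takes (scale, shape) in that order
    return [rng.weibullvariate(scale, shape) for _ in range(n)]

exp_times = poisson_interarrivals(rate=2.0, n=1000)
heavy_times = weibull_interarrivals(shape=0.5, scale=0.5, n=1000)
```

This flexibility is why the Weibull model is attractive where Poisson assumptions break down: one extra parameter covers both regimes.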


Real-Time Network Traffic Forecasting with Missing Data: A Generative Model Approach

Deng, Lei, Xu, Wenhan, Li, Jingwei, Tsang, Danny H. K.

arXiv.org Artificial Intelligence

Real-time network traffic forecasting is crucial for network management and early resource allocation. Existing network traffic forecasting approaches operate under the assumption that the network traffic data is fully observed. However, in practical scenarios, the collected data are often incomplete due to various human and natural factors. In this paper, we propose a generative model approach for real-time network traffic forecasting with missing data. Firstly, we model the network traffic forecasting task as a tensor completion problem. Secondly, we incorporate a pre-trained generative model to achieve the low-rank structure commonly associated with tensor completion. The generative model effectively captures the intrinsic low-rank structure of network traffic data during pre-training and enables the mapping from a compact latent representation to the tensor space. Thirdly, rather than directly optimizing the high-dimensional tensor, we optimize its latent representation, which simplifies the optimization process and enables real-time forecasting. We also establish a theoretical recovery guarantee that quantifies the error bound of the proposed approach. Experiments on real-world datasets demonstrate that our approach achieves accurate network traffic forecasting within 100 ms, with a mean absolute error (MAE) below 0.002, as validated on the Abilene dataset.
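The latent-space optimization the abstract describes can be sketched with a toy linear map standing in for the pre-trained generative model; all dimensions, names, and the learning-rate choice here are illustrative assumptions, not the paper's actual architecture. The key idea survives the simplification: fit the low-dimensional latent code to the observed entries only, then decode it to fill in the missing ones.

```python
import numpy as np

# Toy stand-in for a pre-trained generator: a fixed linear map G from a
# small latent vector z to a flattened "tensor". The real model is a deep
# network; this only illustrates optimizing the latent code.
rng = np.random.default_rng(0)
G = rng.standard_normal((30, 4))       # maps R^4 -> R^30 (flattened tensor)
z_true = rng.standard_normal(4)
x_true = G @ z_true                    # ground-truth traffic tensor (flattened)

mask = rng.random(30) < 0.6            # ~60% of entries observed
x_obs = np.where(mask, x_true, 0.0)

# Optimize the 4-dim latent z instead of the 30-dim tensor, by gradient
# descent on the masked reconstruction error.
lr = 1.0 / np.linalg.norm(G, 2) ** 2   # safe step size from the spectral norm
z = np.zeros(4)
for _ in range(1000):
    residual = mask * (G @ z - x_obs)
    grad = G.T @ residual              # gradient of 0.5*||mask*(Gz - x_obs)||^2
    z -= lr * grad

x_hat = G @ z                          # completed tensor, missing entries included
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Because the latent space is much smaller than the tensor, each iteration is cheap, which is the mechanism behind the paper's real-time claim.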


Large Language Models powered Network Attack Detection: Architecture, Opportunities and Case Study

Zhang, Xinggong, Li, Qingyang, Tan, Yunpeng, Guo, Zongming, Zhang, Lei, Cui, Yong

arXiv.org Artificial Intelligence

Network attack detection is a pivotal technology for identifying network anomalies and classifying malicious traffic. Large Language Models (LLMs), trained on vast corpora of text, have amassed remarkable capabilities in context understanding and commonsense knowledge. This has opened a new door for network threat detection. Researchers have already initiated discussions regarding the application of LLMs to specific cyber-security tasks. Unfortunately, there is still a lack of comprehensive elaboration on how to mine LLMs' potential in network threat detection, as well as on the opportunities and challenges involved. In this paper, we mainly focus on the classification of malicious traffic from the perspective of LLMs' capabilities. We present a holistic view of the architecture of LLM-powered network attack detection, including Pre-training, Fine-tuning, and Detection. In particular, by exploring the knowledge and capabilities of LLMs, we identify three distinct roles an LLM can play in network attack detection: Classifier, Encoder, and Predictor. For each of them, the modeling paradigm, opportunities, and challenges are elaborated. Finally, we present our design of LLM-powered DDoS detection as a case study. The proposed framework attains accurate detection of carpet-bombing DDoS by exploiting LLMs' capabilities in contextual mining. The evaluation shows its efficacy, exhibiting a nearly 35% improvement compared to existing systems.


LLMs Have Rhythm: Fingerprinting Large Language Models Using Inter-Token Times and Network Traffic Analysis

Alhazbi, Saeif, Hussain, Ahmed Mohamed, Oligeri, Gabriele, Papadimitratos, Panos

arXiv.org Artificial Intelligence

As Large Language Models (LLMs) become increasingly integrated into many technological ecosystems across various domains and industries, identifying which model is deployed or being interacted with is critical for the security and trustworthiness of the systems. Current verification methods typically rely on analyzing the generated output to determine the source model. However, these techniques are susceptible to adversarial attacks, operate in a post-hoc manner, and may require access to the model weights to inject a verifiable fingerprint. In this paper, we propose a novel passive and non-invasive fingerprinting technique that operates in real time and remains effective even under encrypted network traffic conditions. Our method leverages the intrinsic autoregressive nature of language models, which generate text one token at a time based on all previously generated tokens, creating a unique temporal pattern, like a rhythm or heartbeat, that persists even when the output is streamed over a network. We find that measuring the Inter-Token Times (ITTs), the time intervals between consecutive tokens, can identify different language models with high accuracy. We develop a Deep Learning (DL) pipeline to capture these timing patterns using network traffic analysis and evaluate it on 16 Small Language Models (SLMs) and 10 proprietary LLMs across different deployment scenarios, including local host machine (GPU/CPU), Local Area Network (LAN), Remote Network, and Virtual Private Network (VPN). The experimental results confirm that our proposed technique is effective and maintains high accuracy even when tested in different network conditions. This work opens a new avenue for model identification in real-world scenarios and contributes to more secure and trustworthy language model deployment.
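The core measurement is simple enough to sketch: given the arrival timestamps of streamed tokens, the ITT sequence is the vector of consecutive differences. The timestamps and summary features below are hypothetical; the paper feeds the timing patterns into a deep model rather than hand-crafted statistics.

```python
def inter_token_times(timestamps):
    """Inter-Token Times (ITTs): gaps between consecutive token arrivals."""
    return [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]

def itt_features(timestamps):
    """A few summary statistics of the timing 'rhythm'; a real pipeline
    would feed the raw ITT sequence into a DL model instead."""
    itts = inter_token_times(timestamps)
    n = len(itts)
    mean = sum(itts) / n
    var = sum((x - mean) ** 2 for x in itts) / n
    return {"mean": mean, "std": var ** 0.5, "min": min(itts), "max": max(itts)}

# Hypothetical packet-arrival timestamps (seconds) for six streamed tokens
ts = [0.00, 0.05, 0.11, 0.16, 0.30, 0.35]
feats = itt_features(ts)
```

Because only arrival times are used, the measurement works on encrypted streams, which is what makes the technique passive and non-invasive.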


Beyond Text-to-SQL for IoT Defense: A Comprehensive Framework for Querying and Classifying IoT Threats

Pavlich, Ryan, Ebadi, Nima, Tarbell, Richard, Linares, Billy, Tan, Adrian, Humphreys, Rachael, Das, Jayanta Kumar, Ghandiparsi, Rambod, Haley, Hannah, George, Jerris, Slavin, Rocky, Choo, Kim-Kwang Raymond, Dietrich, Glenn, Rios, Anthony

arXiv.org Artificial Intelligence

Recognizing the promise of natural language interfaces to databases, prior studies have emphasized the development of text-to-SQL systems. While substantial progress has been made in this field, existing research has concentrated on generating SQL statements from text queries. The broader challenge, however, lies in inferring new information about the returned data. Our research makes two major contributions to address this gap. First, we introduce a novel Internet-of-Things (IoT) text-to-SQL dataset comprising 10,985 text-SQL pairs and 239,398 rows of network traffic activity. The dataset contains query types that are scarce in prior text-to-SQL datasets, notably temporal queries. Our dataset is sourced from a smart building's IoT ecosystem and covers sensor readings and network traffic data. Second, our dataset supports two-stage processing, where the returned data (network traffic) from a generated SQL query can be categorized as malicious or not. Our results show that joint training to query and infer information about the data can improve overall text-to-SQL performance, nearly matching substantially larger models. We also show that current large language models (e.g., GPT-3.5) struggle to infer new information about returned data; thus our dataset provides a novel test bed for integrating complex domain-specific reasoning into LLMs.


A Cutting-Edge Deep Learning Method For Enhancing IoT Security

Ansar, Nadia, Ansari, Mohammad Sadique, Sharique, Mohammad, Khatoon, Aamina, Malik, Md Abdul, Siddiqui, Md Munir

arXiv.org Artificial Intelligence

The IoT poses significant challenges, given the heterogeneity of billions of devices and the large volumes of data they produce. This paper proposes an innovative design for an Internet of Things (IoT) Environment Intrusion Detection System (IDS) integrating deep learning with Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) networks. Our model, trained on the CICIDS2017 dataset, achieved an accuracy of 99.52% in classifying network traffic as either benign or malicious. The real-time processing capability, scalability, and low false-alarm rate of our model surpass those of some traditional IDS approaches and therefore make it well suited for application in today's IoT networks. The development and performance of the model are discussed, along with possible applications that may extend to related fields through adaptive learning techniques and cross-domain applicability. This research into deep learning for IoT cybersecurity offers a potent solution for significantly improving network security.


Novel Approach to Intrusion Detection: Introducing GAN-MSCNN-BILSTM with LIME Predictions

Benchama, Asmaa, Zebbara, Khalid

arXiv.org Artificial Intelligence

This paper introduces an innovative intrusion detection system that harnesses Generative Adversarial Networks (GANs), Multi-Scale Convolutional Neural Networks (MSCNNs), and Bidirectional Long Short-Term Memory (BiLSTM) networks, supplemented by Local Interpretable Model-Agnostic Explanations (LIME) for interpretability. Employing a GAN, the system generates realistic network traffic data, encompassing both normal and attack patterns. This synthesized data is then fed into an MSCNN-BiLSTM architecture for intrusion detection. The MSCNN layer extracts features from the network traffic data at different scales, while the BiLSTM layer captures temporal dependencies within the traffic sequences. Integration of LIME allows for explaining the model's decisions. Evaluation on the Hogzilla dataset, a standard benchmark, showcases an impressive accuracy of 99.16% for multi-class classification and 99.10% for binary classification, while ensuring interpretability through LIME. This fusion of deep learning and interpretability presents a promising avenue for enhancing intrusion detection systems by improving transparency and decision support in network security.


Using Graph Theory for Improving Machine Learning-based Detection of Cyber Attacks

Zonneveld, Giacomo, Principi, Lorenzo, Baldi, Marco

arXiv.org Artificial Intelligence

Early detection of network intrusions and cyber threats is one of the main pillars of cybersecurity. One of the most effective approaches for this purpose is to analyze network traffic with the help of artificial intelligence algorithms, with the aim of detecting the possible presence of an attacker by distinguishing it from a legitimate user. This is commonly done by collecting the traffic exchanged between terminals in a network and analyzing it on a per-packet or per-connection basis. In this paper, we propose instead to pre-process the network traffic under analysis in order to extract new metrics on which we can perform more efficient detection and overcome some limitations of classical approaches. These new metrics are based on graph theory and consider the network as a whole, rather than focusing on individual packets or connections. Our approach is validated through experiments on publicly available datasets, and the results show that it not only overcomes some of the limitations of classical approaches but also achieves better detection of cyber threats.
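The whole-network view can be sketched with a minimal example: flow records become edges of a host graph, and per-node graph metrics (here, just the degree) replace per-packet features. The hosts and flows below are purely illustrative, and the paper's actual metrics are richer than the degree alone.

```python
from collections import defaultdict

# Flow records as (source, destination) pairs; in practice these come
# from captured traffic. Host names here are hypothetical.
flows = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("E", "A")]

def degree_metrics(flow_pairs):
    """Treat the network as an undirected graph of hosts and compute the
    per-node degree, one of the simplest whole-network graph metrics."""
    neighbors = defaultdict(set)
    for src, dst in flow_pairs:
        neighbors[src].add(dst)
        neighbors[dst].add(src)
    return {node: len(nbrs) for node, nbrs in neighbors.items()}

deg = degree_metrics(flows)
# A host contacting unusually many distinct peers (a high-degree outlier,
# like "A" here) is the kind of network-wide signal that a per-packet
# view would miss.
```

Feeding such graph-level features to a classifier, instead of raw packet fields, is the pre-processing step the paper argues for.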